Results 1 - 20 of 46
1.
Psychol Music ; 52(3): 305-321, 2024 May.
Article in English | MEDLINE | ID: mdl-38708378

ABSTRACT

Music that evokes strong emotional responses is often experienced as autobiographically salient. Through emotional experience, the musical features of songs could also contribute to their subjective autobiographical saliency. Songs that were popular during adolescence or young adulthood (ages 10-30) are more likely to evoke stronger memories, a phenomenon known as the reminiscence bump. In the present study, we sought to determine how song-specific age, emotional responsiveness to music, musical features, and subjective memory functioning contribute to the subjective autobiographical saliency of music in older adults. In a music listening study, 112 participants rated excerpts of popular songs from the 1950s to the 1980s for autobiographical saliency. Additionally, they filled out questionnaires about emotional responsiveness to music and subjective memory functioning. The song excerpts' musical features were extracted computationally using MIRtoolbox. Results showed that autobiographical saliency was best predicted by song-specific age, emotional responsiveness to music, and musical features. Newer songs that were more similar in rhythm to older songs were also rated higher in autobiographical saliency. Overall, this study contributes to autobiographical memory research by uncovering a set of factors affecting the subjective autobiographical saliency of music.

2.
Ann N Y Acad Sci ; 1530(1): 18-22, 2023 12.
Article in English | MEDLINE | ID: mdl-37847675

ABSTRACT

Music listening is a dynamic process that entails complex interactions between sensory, cognitive, and emotional processes. The naturalistic paradigm provides a means to investigate these processes in an ecologically valid manner by allowing experimental settings that mimic real-life musical experiences. In this paper, we highlight the importance of the naturalistic paradigm in studying dynamic music processing and discuss how it allows for investigating both the segregation and integration of brain processes using model-based and model-free methods. We further suggest that studying individual difference-modulated music processing in this paradigm can provide insights into the mechanisms of brain plasticity, which can have implications for the development of interventions and therapies in a personalized way. Finally, despite the challenges that the naturalistic paradigm poses, we end with a discussion on future prospects of music and neuroscience research, especially with the continued development and refinement of naturalistic paradigms and the adoption of open science practices.


Subjects
Brain Mapping , Music , Humans , Brain Mapping/methods , Auditory Perception , Magnetic Resonance Imaging , Brain
3.
Cogn Sci ; 47(4): e13281, 2023 04.
Article in English | MEDLINE | ID: mdl-37096347

ABSTRACT

Body movement is a primary nonverbal communication channel in humans. Coordinated social behaviors, such as dancing together, encourage multifarious rhythmic and interpersonally coupled movements from which observers can extract socially and contextually relevant information. The investigation of relations between visual social perception and kinematic motor coupling is important for social cognition. Perceived coupling of dyads spontaneously dancing to pop music has been shown to be strongly driven by the degree of frontal orientation between dancers. The perceptual salience of other aspects, including postural congruence, movement frequencies, time-delayed relations, and horizontal mirroring, remains, however, uncertain. In a motion capture study, 90 participant dyads moved freely to 16 musical excerpts from eight musical genres, while their movements were recorded using optical motion capture. A total of 128 recordings from 8 dyads maximally facing each other were selected to generate silent 8-s animations. Three kinematic features describing simultaneous and sequential full-body coupling were extracted from the dyads. In an online experiment, the animations were presented to 432 observers, who were asked to rate perceived similarity and interaction between dancers. We found dyadic kinematic coupling estimates to be higher than those obtained from surrogate estimates, providing evidence for a social dimension of entrainment in dance. Further, we observed links between perceived similarity and coupling of both slower simultaneous horizontal gestures and posture bounding volumes. Perceived interaction, on the other hand, was more related to coupling of faster simultaneous gestures and to sequential coupling. Also, dyads who were perceived as more coupled tended to mirror their pair's movements.
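The coupling-versus-surrogate comparison described in this abstract can be illustrated with a minimal sketch. Everything below is synthetic and assumed for illustration (toy velocity signals, a plain shuffle surrogate, absolute correlation as the coupling estimate); the study itself used richer kinematic features and surrogate dyads:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated vertical hand velocities for two dancers (hypothetical data):
# dancer B partially follows dancer A's 2 Hz movement, with added noise.
t = np.linspace(0, 8, 800)                      # 8 s at 100 Hz
dancer_a = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)
dancer_b = 0.8 * np.sin(2 * np.pi * 2.0 * t) + 0.6 * rng.standard_normal(t.size)

def coupling(x, y):
    """Absolute Pearson correlation as a crude coupling estimate."""
    return abs(np.corrcoef(x, y)[0, 1])

# Surrogate baseline: couple dancer A with shuffled copies of dancer B,
# destroying temporal alignment while keeping B's amplitude distribution.
observed = coupling(dancer_a, dancer_b)
surrogate_mean = float(np.mean(
    [coupling(dancer_a, rng.permutation(dancer_b)) for _ in range(200)]))

print(f"observed coupling:  {observed:.3f}")
print(f"surrogate baseline: {surrogate_mean:.3f}")
```

The observed estimate clearly exceeds the surrogate baseline here because the two signals share a common oscillation; with truly independent movers the two values would be comparable.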


Subjects
Gestures , Music , Humans , Imitative Behavior , Movement , Posture , Visual Perception
4.
PLoS One ; 17(9): e0275228, 2022.
Article in English | MEDLINE | ID: mdl-36174020

ABSTRACT

Previous literature has shown that music preferences (and thus preferred musical features) differ depending on the listening context and reasons for listening (RL). Yet, to our knowledge no research has investigated how features of music that people dance or move to relate to particular RL. Consequently, in two online surveys, participants (N = 173) were asked to name songs they move to ("dance music"). Additionally, participants (N = 105) from Survey 1 provided RL for their selected songs. To investigate relationships between the two, we first extracted audio features from dance music using the Spotify API and compared those features with a baseline dataset that is considered to represent music in general. Analyses revealed that, compared to the baseline, the dance music dataset had significantly higher levels of energy, danceability, valence, and loudness, and lower speechiness, instrumentalness and acousticness. Second, to identify potential subgroups of dance music, a cluster analysis was performed on its Spotify audio features. Results of this cluster analysis suggested five subgroups of dance music with varying combinations of Spotify audio features: "fast-lyrical", "sad-instrumental", "soft-acoustic", "sad-energy", and "happy-energy". Third, a factor analysis revealed three main RL categories: "achieving self-awareness", "regulation of arousal and mood", and "expression of social relatedness". Finally, we identified variations in people's RL ratings for each subgroup of dance music. This suggests that certain characteristics of dance music are more suitable for listeners' particular RL, which shape their music preferences. Importantly, the highest-rated RL items for dance music belonged to the "regulation of mood and arousal" category. This might be interpreted as the main function of dance music. We hope that future research will elaborate on connections between musical qualities of dance music and particular music listening functions.
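The clustering step can be sketched as follows. The feature names match the Spotify audio descriptors mentioned in the abstract, but the values here are random placeholders; a real pipeline would fetch them from the Spotify API (for example via the spotipy client) before clustering:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for Spotify audio features of 173 "dance music" tracks
# (columns mimic the API's 0-1 ranged descriptors; real values would come
# from the audio-features endpoint).
features = ["energy", "danceability", "valence", "acousticness", "speechiness"]
X = rng.uniform(0, 1, size=(173, len(features)))

# Standardize, then cluster into five candidate subgroups, matching the
# five-cluster solution described in the abstract.
X_std = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_std)

labels = km.labels_
print("cluster sizes:", np.bincount(labels))
```

In practice the number of clusters would be validated (e.g., with silhouette scores) rather than fixed at five in advance.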


Subjects
Communications Media , Music , Acoustics , Auditory Perception , Auscultation , Humans
5.
Sci Rep ; 12(1): 2672, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35177683

ABSTRACT

Movement is a universal response to music, with dance often taking place in social settings. Although previous work has suggested that socially relevant information, such as personality and gender, is encoded in dance movement, the generalizability of previous work is limited. The current study aims to decode dancers' gender, personality traits, and music preference from music-induced movements. We propose a method that predicts such individual differences from free dance movements, and demonstrate the robustness of the proposed method by using two datasets collected using different musical stimuli. In addition, we introduce a novel measure to explore the relative importance of different joints in predicting individual differences. Results demonstrated near-perfect classification of gender, and notably high prediction of personality and music preferences. Furthermore, the learned models generalized across datasets, highlighting the importance of certain joints in intrinsic movement patterns specific to individual differences. The results further support theories of embodied music cognition and the role of bodily movement in musical experiences by demonstrating the influence of gender, personality, and music preferences on embodied responses to heard music.

6.
Hum Mov Sci ; 81: 102894, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34798445

ABSTRACT

Humans are able to synchronize with musical events whilst coordinating their movements with others. Interpersonal entrainment phenomena, such as dance, involve multiple body parts and movement directions. Along with being multidimensional, dance movement interaction is plurifrequential, since it can occur at different frequencies simultaneously. Moreover, it is prone to nonstationarity, due to, for instance, displacements around the dance floor. Various methodological approaches have been adopted for the study of human entrainment, but only spectrogram-based techniques allow for an integral analysis thereof. This article proposes an alternative approach based upon the cross-wavelet transform, a state-of-the-art technique for nonstationary and plurifrequential analysis of univariate interaction. The presented approach generalizes the cross-wavelet transform to multidimensional signals. It makes it possible to identify, for different frequencies of movement, estimates of interaction and leader-follower dynamics across body parts and movement directions. Further, the generalized cross-wavelet transform can be used to quantify the frequency-wise contribution of individual body parts and movement directions to overall movement synchrony. Since both in- and anti-phase relationships are dominant modes of coordination, the proposed implementation ignores whether movements are identical or opposite in phase. The article provides a thorough mathematical description of the method and includes proofs of its invariance under translation, rotation, and reflection. Finally, its properties and performance are illustrated via four examples using simulated data and behavioral data collected through a mirror game task and a free dance movement task.
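A toy version of the idea, not the paper's implementation, can be sketched with a hand-rolled Morlet transform: compute each dimension's cross-wavelet spectrum, sum across dimensions, and read off the frequency with the strongest interaction. All signals and parameter choices below are assumptions for illustration:

```python
import numpy as np

fs = 50.0                                        # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

def morlet_cwt(x, freqs, fs, n_cycles=6.0):
    """Complex Morlet CWT of a 1-D signal; one row per analysis frequency."""
    out = np.empty((len(freqs), x.size), dtype=complex)
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)       # Gaussian width in seconds
        tw = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
        out[i] = np.convolve(x, wavelet, mode="same")
    return out

# Two movers, each described by two spatial dimensions (a toy stand-in for
# full-body data); both share a 2 Hz oscillation in dimension 0 only, and
# that shared oscillation is in quadrature (phase-shifted) between them.
a = np.stack([np.sin(2 * np.pi * 2 * t), rng.standard_normal(t.size)])
b = np.stack([np.cos(2 * np.pi * 2 * t), rng.standard_normal(t.size)])
freqs = np.array([1.0, 2.0, 4.0, 8.0])

# Multidimensional cross-wavelet: sum per-dimension cross-spectra, then take
# magnitudes, which discards the phase sign (in/anti-phase treated alike).
cross = sum(morlet_cwt(a[d], freqs, fs) * np.conj(morlet_cwt(b[d], freqs, fs))
            for d in range(2))
power = np.abs(cross).mean(axis=1)               # mean cross-power per frequency
peak_freq = float(freqs[np.argmax(power)])
print("peak interaction frequency:", peak_freq, "Hz")
```

The cross-power peaks at the shared 2 Hz component despite the phase offset and the uncoupled noise dimension, which is the basic behavior the generalized transform exploits.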


Subjects
Movement , Wavelet Analysis , Humans
7.
Front Psychol ; 12: 647756, 2021.
Article in English | MEDLINE | ID: mdl-34017286

ABSTRACT

Although music is known to be a part of everyday life and a resource for mood and emotion management, everyday life changed significantly for many due to the global coronavirus pandemic, making the role of music in everyday life less certain. An online survey, in which participants answered Likert-scale questions and provided free-text responses, was used to explore how participants were engaging with music during the first wave of the pandemic, whether and how they were using music for mood regulation, and how their engagement with music related to their experiences of worry and anxiety resulting from the pandemic. Results indicated that, although many participants felt their use of music had changed since the beginning of the pandemic, for the majority the amount of their music listening was either unaffected by the pandemic or increased. This was especially true of listening to self-selected music and watching live-streamed concerts. Analysis revealed correlations between participants' use of music for mood regulation, their musical engagement, and their levels of anxiety and worry. A small number of participants described having negative emotional responses to music, the majority of whom also reported severe levels of anxiety.

8.
PLoS One ; 16(5): e0251692, 2021.
Article in English | MEDLINE | ID: mdl-33989366

ABSTRACT

BACKGROUND AND OBJECTIVES: Music has a unique capacity to evoke both strong emotions and vivid autobiographical memories. Previous music information retrieval (MIR) studies have shown that the emotional experience of music is influenced by a combination of musical features, including tonal, rhythmic, and loudness features. Here, our aim was to explore the relationship between music-evoked emotions and music-evoked memories, and how musical features (derived with MIR) can predict them both. METHODS: Healthy older adults (N = 113, age ≥ 60 years) participated in a listening task in which they rated a total of 140 song excerpts, comprising folk songs and popular songs from the 1950s to the 1980s, on five domains measuring the emotional (valence, arousal, emotional intensity) and memory (familiarity, autobiographical salience) experience of the songs. A set of 24 musical features was extracted from the songs using computational MIR methods. Principal component analyses were applied to reduce multicollinearity, resulting in six core musical components, which were then used to predict the behavioural ratings in multiple regression analyses. RESULTS: All correlations between behavioural ratings were positive and ranged from moderate to very high (r = 0.46-0.92). Emotional intensity showed the highest correlation with both autobiographical salience and familiarity. In the MIR data, three musical components measuring the salience of the musical pulse (Pulse strength), the relative strength of high harmonics (Brightness), and fluctuation in the frequencies between 200 and 800 Hz (Low-mid) predicted both music-evoked emotions and memories. Emotional intensity (and valence to a lesser extent) mediated the predictive effect of the musical components on music-evoked memories. CONCLUSIONS: The results suggest that music-evoked emotions are strongly related to music-evoked memories in healthy older adults and that both music-evoked emotions and memories are predicted by the same core musical features.
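The PCA-then-regression pipeline described in METHODS can be sketched on synthetic data (all numbers below are placeholders; the real study used 24 MIR descriptors of 140 songs and behavioural ratings):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# 140 songs x 24 correlated acoustic descriptors (synthetic stand-ins for
# MIR features such as pulse strength or brightness). Collinearity is built
# in by generating the 24 descriptors from 6 latent components.
latent = rng.standard_normal((140, 6))
mixing = rng.standard_normal((6, 24))
descriptors = latent @ mixing + 0.1 * rng.standard_normal((140, 24))

# A behavioural rating (e.g. emotional intensity) driven by the latents.
ratings = latent @ rng.standard_normal(6) + 0.2 * rng.standard_normal(140)

# Reduce the 24 collinear descriptors to 6 principal components, then
# regress the ratings on the components, mirroring the abstract's pipeline.
model = make_pipeline(StandardScaler(), PCA(n_components=6), LinearRegression())
model.fit(descriptors, ratings)
r_squared = model.score(descriptors, ratings)
print(f"R^2 of component regression: {r_squared:.3f}")
```

Because the six components absorb the shared variance, the regression coefficients are stable in a way that a direct 24-predictor regression on collinear features would not be.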


Subjects
Acoustic Stimulation , Emotions/physiology , Memory, Episodic , Mental Recall/physiology , Music , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged
10.
Int J Neural Syst ; 31(3): 2150001, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33353528

ABSTRACT

To examine the electrophysiological underpinnings of the functional networks involved in music listening, approaches based on spatial independent component analysis (ICA) have recently been applied to ongoing electroencephalography (EEG) and magnetoencephalography (MEG). However, those studies focused on healthy subjects and did not perform group-level comparisons during music listening. Here, we combined group-level spatial Fourier ICA with acoustic feature extraction to enable group comparisons in frequency-specific brain networks of musical feature processing. The method was then applied to healthy subjects and subjects with major depressive disorder (MDD). The music-induced oscillatory brain patterns were determined by permutation correlation analysis between individual time courses of Fourier-ICA components and musical features. We found that (1) three components, comprising a beta sensorimotor network, a beta auditory network, and an alpha medial visual network, were involved in music processing in most healthy subjects; and (2) one alpha lateral component, located in the left angular gyrus, was engaged in music perception in most individuals with MDD. The proposed method allowed statistical group comparison, and we found that (1) the alpha lateral component was activated more strongly in healthy subjects than in the MDD individuals, and (2) the derived frequency-dependent networks of musical feature processing appeared to be altered in MDD participants compared to healthy subjects. The proposed pipeline appears to be valuable for studying disrupted brain oscillations in psychiatric disorders during naturalistic paradigms.
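As a much simplified stand-in for group-level spatial Fourier-ICA, plain temporal ICA on simulated sensor mixtures illustrates the core idea of recovering oscillatory components; the sources, mixing, and use of scikit-learn's FastICA are assumptions for illustration, not the study's method:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
t = np.linspace(0, 8, 2000)

# Two simulated "brain" sources: an alpha-like (10 Hz) sinusoid and a
# beta-like (20 Hz) square wave, mixed into four sensor channels with noise.
sources = np.column_stack([np.sin(2 * np.pi * 10 * t),
                           np.sign(np.sin(2 * np.pi * 20 * t))])
mixing = rng.standard_normal((2, 4))
sensors = sources @ mixing + 0.05 * rng.standard_normal((2000, 4))

# Plain temporal FastICA on the sensor time courses.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(sensors)      # (2000, 2) estimated sources

# Match each recovered component to its best-correlated true source
# (sign and order of ICA components are arbitrary, hence the abs).
corr = np.abs(np.corrcoef(sources.T, recovered.T))[:2, 2:]
print("source-component |r| matrix:\n", corr.round(2))
```

Each true oscillation is recovered almost perfectly by one component, which is the separation property the Fourier-ICA pipeline builds on in the spectral domain.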


Subjects
Depressive Disorder, Major , Music , Auditory Perception , Brain , Brain Mapping , Depression , Electroencephalography , Humans
11.
Brain Topogr ; 33(3): 289-302, 2020 05.
Article in English | MEDLINE | ID: mdl-32124110

ABSTRACT

Recently, exploring brain activity based on functional networks during naturalistic stimuli, especially music and video, has become an attractive challenge because of the low signal-to-noise ratio in collected brain data. Although most efforts to explore the listening brain have been made through functional magnetic resonance imaging (fMRI) or sensor-level electro- or magnetoencephalography (EEG/MEG) techniques, little is known about how neural rhythms are involved in brain network activity under naturalistic stimuli. This study examined cortical oscillations through analysis of ongoing EEG and musical features during free listening to music. We used a data-driven method that combined music information retrieval with spatial Fourier independent component analysis (spatial Fourier-ICA) to probe the interplay between the spatial profiles and the spectral patterns of the brain networks emerging from music listening. Correlation analysis was performed between time courses of brain networks extracted from EEG data and musical feature time series extracted from the music stimuli to derive the musical-feature-related oscillatory patterns in the listening brain. We found that brain networks of musical feature processing were frequency-dependent. Musical feature time series, especially fluctuation centroid and key, were associated with increased beta activation in the bilateral superior temporal gyrus. Increased alpha oscillation in the bilateral occipital cortex emerged during music listening, consistent with the hypothesis of alpha functional suppression in task-irrelevant regions. We also observed increased delta-beta oscillatory activity in the prefrontal cortex associated with musical feature processing. Beyond these findings, the proposed method appears valuable for characterizing the large-scale frequency-dependent brain activity engaged in musical feature processing.


Subjects
Auditory Perception , Brain Mapping , Music , Brain/diagnostic imaging , Electroencephalography , Humans
12.
J Neurosci Methods ; 330: 108502, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31730873

ABSTRACT

BACKGROUND: Ongoing EEG data are recorded as mixtures of stimulus-elicited EEG, spontaneous EEG, and noise, which require advanced signal processing techniques for separation and analysis. Existing methods cannot simultaneously consider common and individual characteristics among/within subjects when extracting stimulus-elicited brain activities from ongoing EEG elicited by a 512-s-long piece of modern tango music. NEW METHOD: Aiming to discover the brain activities commonly elicited by music across subjects, we provide a comprehensive framework based on a fast double-coupled nonnegative tensor decomposition (FDC-NTD) algorithm. The proposed algorithm, with a generalized model, is capable of simultaneously decomposing EEG tensors into common and individual components. RESULTS: With the proposed framework, the brain activities can be effectively extracted and sorted into clusters of interest. The proposed algorithm based on the generalized model achieved better fits and stronger robustness. In addition to the distribution over centro-parietal and occipito-parietal regions with theta and alpha oscillations, the music-elicited brain activities were also located in the frontal region and distributed in the 4-11 Hz band. COMPARISON WITH EXISTING METHOD(S): The present study, by providing a solution for separating common stimulus-elicited brain activities using coupled tensor decomposition, has shed new light on the processing and analysis of ongoing EEG data at the multi-subject level. It can also reveal more links between brain responses and the continuous musical stimulus. CONCLUSIONS: The proposed framework based on coupled tensor decomposition can be successfully applied to group analysis of ongoing EEG data, as it can be reliably inferred that the brain activities we obtained are associated with the musical stimulus.
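A drastically simplified illustration of the shared-component idea, not the FDC-NTD algorithm itself, is to unfold a nonnegative subjects × channels × windows tensor and run a single NMF, so that all subjects share one set of channel loadings and time courses. The toy data and rank below are assumptions:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)

# Toy nonnegative "EEG power" tensor: 8 subjects x 16 channels x 60 windows,
# built from 3 shared components plus small subject-specific noise.
n_subj, n_chan, n_win, rank = 8, 16, 60, 3
chan_maps = rng.random((rank, n_chan))
time_courses = rng.random((rank, n_win))
shared = np.einsum("rc,rw->cw", chan_maps, time_courses)   # (16, 60)
data = np.stack([shared + 0.05 * rng.random((n_chan, n_win))
                 for _ in range(n_subj)])                  # (8, 16, 60)

# Stand-in for coupled nonnegative decomposition: stack subjects along the
# channel axis and factor once, forcing components common to all subjects.
unfolded = data.reshape(n_subj * n_chan, n_win)            # (128, 60)
nmf = NMF(n_components=rank, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(unfolded)                            # loadings per subject-channel
H = nmf.components_                                        # shared time courses
recon_err = np.linalg.norm(unfolded - W @ H) / np.linalg.norm(unfolded)
print(f"relative reconstruction error: {recon_err:.3f}")
```

The low reconstruction error shows the shared structure is captured; the actual FDC-NTD model goes further by additionally fitting individual components per subject.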


Subjects
Algorithms , Brain/physiology , Electroencephalography/methods , Functional Neuroimaging/methods , Signal Processing, Computer-Assisted , Adult , Auditory Perception/physiology , Humans , Middle Aged , Music , Young Adult
13.
Neuroimage ; 216: 116191, 2020 08 01.
Article in English | MEDLINE | ID: mdl-31525500

ABSTRACT

Keeping time is fundamental for our everyday existence. Various isochronous activities, such as locomotion, require us to use internal timekeeping. This phenomenon comes into play also in other human pursuits such as dance and music. When listening to music, we spontaneously perceive and predict its beat. The process of beat perception comprises both beat inference and beat maintenance, their relative importance depending on the salience of beat in the music. To study functional connectivity associated with these processes in a naturalistic situation, we used functional magnetic resonance imaging to measure brain responses of participants while they were listening to a piece of music containing strong contrasts in beat salience. Subsequently, we utilized dynamic graph analysis and psychophysiological interactions (PPI) analysis in connection with computational modelling of beat salience to investigate how functional connectivity manifests these processes. As the main effect, correlation analyses between the obtained dynamic graph measures and the beat salience measure revealed increased centrality in auditory-motor cortices, cerebellum, and extrastriate visual areas during low beat salience, whereas regions of the default mode- and central executive networks displayed high centrality during high beat salience. PPI analyses revealed partial dissociation of functional networks belonging to this pathway indicating complementary neural mechanisms crucial in beat inference and maintenance, processes pivotal for extracting and predicting temporal regularities in our environment.


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Cerebellum/physiology , Connectome/psychology , Motor Cortex/physiology , Music/psychology , Acoustic Stimulation/methods , Adult , Auditory Cortex/diagnostic imaging , Cerebellum/diagnostic imaging , Connectome/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male , Motor Cortex/diagnostic imaging , Periodicity , Young Adult
14.
Sci Rep ; 9(1): 15594, 2019 10 30.
Article in English | MEDLINE | ID: mdl-31666586

ABSTRACT

We investigated the relationships between perceptions of similarity and interaction in spontaneously dancing dyads, and movement features extracted using novel computational methods. We hypothesized that dancers' movements would be perceived as more similar when they exhibited spatially and temporally comparable movement patterns, and as more interactive when they spatially oriented more towards each other. Pairs of dancers were asked to move freely to two musical excerpts while their movements were recorded using optical motion capture. Subsequently, in two separate perceptual experiments we presented stick figure animations of the dyads to observers, who rated degree of interaction and similarity between dancers. Mean perceptual ratings were compared with three different approaches for quantifying coordination: torso orientation, temporal coupling, and spatial coupling. Correlations and partial correlations across dyads were computed between each estimate and the perceptual measures. A systematic exploration showed that torso orientation (dancers facing more towards each other) is a strong predictor of perceived interaction even after controlling for other features, whereas temporal and spatial coupling (dancers moving similarly in space and in time) are better predictors for perceived similarity. Further, our results suggest that similarity is a necessary but not sufficient condition for interaction.

15.
Atten Percept Psychophys ; 81(7): 2461-2472, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31062302

ABSTRACT

For both musicians and music psychologists, beat rate (BPM) has often been regarded as a transparent measure of musical speed or tempo, yet recent research has shown that tempo is more than just BPM. In a previous study, London, Burger, Thompson, and Toiviainen (Acta Psychologica, 164, 70-80, 2016) presented participants with original as well as "time-stretched" versions of classic R&B songs; time stretching slows down or speeds up a recording without changing its pitch or timbre. In that study we discovered a tempo anchoring effect (TAE): Although relative tempo judgments (original vs. time-stretched versions of the same song) were correct, they were at odds with BPM rates of each stimulus. As previous studies have shown that synchronous movement enhances rhythm perception, we hypothesized that tapping along to the beat of these songs would reduce or eliminate the TAE and increase the salience of the beat rate of each stimulus. In the current study participants were presented with the London et al. (Acta Psychologica, 164, 70-80, 2016) stimuli in nonmovement and movement conditions. We found that although participants were able to make BPM-based tempo judgments of generic drumming patterns, and were able to tap along to the R&B stimuli at the correct beat rates, the TAE persisted in both movement and nonmovement conditions. Thus, contrary to our hypothesis that movement would reduce or eliminate the TAE, we found a disjunction between correctly synchronized motor behavior and tempo judgment. The implications of the tapping-TAE dissociation in the broader context of tempo and rhythm perception are discussed, and further approaches to studying the TAE-tapping dissociation are suggested.


Subjects
Auditory Perception/physiology , Fingers/physiology , Judgment/physiology , Movement/physiology , Music/psychology , Adolescent , Adult , Female , Humans , Male , Motion , Time Factors , Young Adult
16.
PLoS One ; 14(5): e0216499, 2019.
Article in English | MEDLINE | ID: mdl-31051008

ABSTRACT

Learning, attention, and action play a crucial role in determining how stimulus predictions are formed, stored, and updated. Years-long experience with the specific repertoires of sounds of one or more musical styles is what characterizes professional musicians. Here we contrasted the active experience of sounds in professional musicians, namely long-lasting motor practice, theoretical study, and engaged listening to the acoustic features characterizing a musical style of choice, with the mainly passive experience of sounds in laypersons. We hypothesized that long-term active experience of sounds would influence the neural predictions of the stylistic features in professional musicians in a distinct way from the mainly passive experience of sounds in laypersons. Participants with different musical backgrounds were recruited: professional jazz and classical musicians, amateur musicians, and non-musicians. They were presented with a musical multi-feature paradigm eliciting mismatch negativity (MMN), a prediction error signal to changes in six sound features, in only 12 minutes of electroencephalography (EEG) and magnetoencephalography (MEG) recordings. We observed generally larger MMN amplitudes, indicative of stronger automatic neural signals to violated priors, in jazz musicians (but not in classical musicians) as compared to non-musicians and amateurs. The specific MMN enhancements were found for spectral features (timbre, pitch, slide) and sound intensity. In participants who were not musicians, a higher preference for jazz music was associated with reduced MMN to pitch slide (a feature common in the jazz music style). Our results suggest that long-lasting, active experience of a musical style is associated with accurate neural priors for the sound features of the preferred style, in contrast to passive listening.


Subjects
Acoustic Stimulation/methods , Loudness Perception/physiology , Pitch Perception/physiology , Adult , Electroencephalography , Female , Humans , Magnetoencephalography , Male , Music , Young Adult
17.
PLoS One ; 13(4): e0196065, 2018.
Article in English | MEDLINE | ID: mdl-29672597

ABSTRACT

Expertise in music has been investigated for decades, and the results have been applied not only in composition, performance, and music education, but also in understanding brain plasticity in a larger context. Several studies have revealed a strong connection between auditory and motor processes when listening to, performing, and imagining music. Recently, as a logical next step in music and movement research, the cognitive and affective neurosciences have been directed towards expertise in dance. To understand the versatile and overlapping processes during artistic stimuli, such as music and dance, it is necessary to study them with continuous naturalistic stimuli. Thus, we presented long excerpts from the contemporary dance piece Carmen, with and without music, to professional dancers, musicians, and laymen in an EEG laboratory. We were interested in the cortical phase synchrony within each participant group over several frequency bands during uni- and multimodal processing. Dancers had strengthened theta and gamma synchrony during music relative to silence and silent dance, whereas the presence of music systematically decreased alpha and beta synchrony in musicians. Laymen were the only group of participants with significant results related to dance. Future studies are required to understand whether these results are related to some other factor (such as familiarity with the stimuli), or whether they reveal a new point of view on dance observation and expertise.


Subjects
Cerebral Cortex/physiology , Dancing , Music , Acoustic Stimulation , Adult , Brain Waves , Electrophysiological Phenomena , Female , Humans , Male , Young Adult
18.
J Neurosci Methods ; 303: 1-6, 2018 06 01.
Article in English | MEDLINE | ID: mdl-29596859

ABSTRACT

BACKGROUND: There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining the neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of acoustic descriptors computationally extracted from the stimulus audio. NEW METHOD: fMRI data from a naturalistic music listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, the perceptual and neural correlates of the generated high-level features were examined. RESULTS: The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas. COMPARISON WITH EXISTING METHOD: The results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA to generate the stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from the previous study. CONCLUSIONS: Exploiting nonlinear relationships among acoustic descriptors can lead to novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing.
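Why kernel PCA can expose structure hidden from linear PCA is easy to demonstrate on a standard toy dataset (concentric circles, not the study's acoustic descriptors):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA
from sklearn.linear_model import LogisticRegression

# Two concentric circles: not linearly separable in the raw 2-D space,
# so linear PCA (a rotation of that space) cannot help a linear model.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear_scores = PCA(n_components=2).fit_transform(X)
kernel_scores = KernelPCA(n_components=2, kernel="rbf",
                          gamma=10).fit_transform(X)

acc_linear = LogisticRegression().fit(linear_scores, y).score(linear_scores, y)
acc_kernel = LogisticRegression().fit(kernel_scores, y).score(kernel_scores, y)
print(f"accuracy on linear PCA scores: {acc_linear:.2f}")
print(f"accuracy on kernel PCA scores: {acc_kernel:.2f}")
```

The RBF kernel mapping makes the ring structure (here, the "hidden" nonlinear relationship) linearly accessible, analogous to how KPCA on acoustic descriptors surfaced percepts that linear PCA missed.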


Subjects
Auditory Perception/physiology , Brain Mapping/methods , Brain/physiology , Cognitive Neuroscience/methods , Music , Adult , Brain/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Principal Component Analysis , Young Adult
19.
Sci Rep ; 8(1): 2266, 2018 02 02.
Article in English | MEDLINE | ID: mdl-29396524

ABSTRACT

Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in the auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in the auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
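The entropy measure invoked here can be sketched with a simple amplitude-histogram estimate (the binning scheme and toy signals are assumptions; the study computed information content on musical stimuli):

```python
import numpy as np

def shannon_entropy(signal, n_bins=16, value_range=(-3.0, 3.0)):
    """Shannon entropy (bits) of a signal's amplitude histogram."""
    counts, _ = np.histogram(signal, bins=n_bins, range=value_range)
    p = counts / counts.sum()
    p = p[p > 0]                          # define 0 * log(0) as 0
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 4000)

# A quiet, repetitive signal versus a broadband unpredictable one: the
# unpredictable signal carries more information under this measure,
# approaching the 16-bin maximum of log2(16) = 4 bits.
repetitive = 0.2 * np.sin(2 * np.pi * 5 * t)
unpredictable = rng.uniform(-3, 3, size=t.size)

h_low = shannon_entropy(repetitive)
h_high = shannon_entropy(unpredictable)
print(f"repetitive: {h_low:.2f} bits, unpredictable: {h_high:.2f} bits")
```

Under the abstract's finding, stimuli toward the high end of such an information-content scale were the ones decoded most accurately.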


Subjects
Acoustic Stimulation , Auditory Cortex/physiology , Magnetic Resonance Imaging , Music , Adult , Female , Healthy Volunteers , Humans , Male , Models, Neurological , Spatio-Temporal Analysis , Young Adult
20.
Eur J Neurosci ; 47(5): 433-445, 2018 03.
Article in English | MEDLINE | ID: mdl-29359365

ABSTRACT

Watching performing arts engages a wide and complex network of brain processes, which can be shaped by professional expertise. Compared to laymen, dancers have enhanced processing when observing short dance movements and listening to music. But how do cortical processes differ between musicians and dancers watching an audio-visual dance performance? In our study, we presented participants with long excerpts from the contemporary dance choreography of Carmen. During multimodal movement of a dancer, theta phase synchrony over the fronto-central electrodes was stronger in dancers compared to musicians and laymen. In addition, alpha synchrony was decreased in all groups during large rapid movement compared to nearly motionless parts of the choreography. Our results suggest enhanced cortical communication in dancers when watching dance and, further, that this enhancement is related to multimodal, cognitive, and emotional processes rather than to simple observation of dance movement.


Subjects
Auditory Perception/physiology , Brain/physiology , Dancing , Emotions/physiology , Movement/physiology , Adult , Female , Humans , Male , Music